# Named entity recognition
## Roberta Medium Amharic
- Author: rasyosef
- Tags: Large Language Model · Transformers · Other
- Downloads: 132 · Likes: 1

A RoBERTa model pre-trained from scratch for Amharic, built to address the weak performance of existing models on Amharic NLP tasks.
## FRED T5 Large Instruct V0.1
- Author: bond005
- License: Apache-2.0
- Tags: Large Language Model · Transformers · Other
- Downloads: 173 · Likes: 15

FRED-T5-large-instruct-v0.1 is a Russian model for automatic text editing and question answering, built on PyTorch and Transformers and aimed at a range of Russian text-processing tasks.
## Tr Core News Lg
- Author: turkish-nlp-suite
- Tags: Sequence Labeling · Other
- Downloads: 94 · Likes: 8

A large Turkish NLP pipeline covering tokenization, POS tagging, morphological analysis, lemmatization, dependency parsing, and named entity recognition.
## Bert Base Swedish Cased
- Author: KB
- Tags: Large Language Model · Other
- Downloads: 11.16k · Likes: 21

A Swedish BERT base model released by the National Library of Sweden (KBLab), trained on text from multiple sources.
## Biodivbert
- Author: NoYo25
- License: Apache-2.0
- Tags: Large Language Model · Transformers · English
- Downloads: 49 · Likes: 3

BiodivBERT is a BERT-based, domain-specific model tailored to biodiversity literature.
## Bert Tiny Chinese Pos
- Author: ckiplab
- License: GPL-3.0
- Tags: Sequence Labeling · Transformers · Chinese
- Downloads: 542 · Likes: 2

Provides Traditional Chinese Transformers models and NLP tools covering word segmentation, POS tagging, and named entity recognition.
## Ko Core News Lg
- Author: spacy
- Tags: Sequence Labeling · Korean
- Downloads: 52 · Likes: 2

A CPU-optimized Korean pipeline with full NLP capabilities, including tokenization, POS tagging, dependency parsing, and named entity recognition.
## Vihealthbert Base Word
- Author: demdecuong
- Tags: Large Language Model · Transformers
- Downloads: 633 · Likes: 5

ViHealthBERT is a pre-trained language model for Vietnamese health text mining, offering strong baseline performance in the healthcare domain.
## Roberta Base Wechsel Ukrainian
- Author: benjamin
- License: MIT
- Tags: Large Language Model · Transformers · Other
- Downloads: 16 · Likes: 0

A Ukrainian version of roberta-base transferred with the WECHSEL method; it performs well across multiple Ukrainian NLP tasks.
## Lvbert
- Author: AiLab-IMCS-UL
- License: Apache-2.0
- Tags: Large Language Model · Transformers · Other
- Downloads: 473 · Likes: 4

A Latvian pre-trained language model based on the BERT architecture, suitable for a range of natural language understanding tasks.
## Bertoverflow
- Author: jeniya
- Tags: Sequence Labeling
- Downloads: 114 · Likes: 9

A BERT-base model pre-trained on a decade of StackOverflow archive data, specialized for code and named entity recognition tasks.
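Token-classification models like this one typically emit one BIO tag per token, and downstream code has to group those tags into entity spans. A minimal, illustrative decoder for that step (this helper is not part of any listed model's API):

```python
def decode_bio(tokens, tags):
    """Group per-token BIO tags into (entity_type, text) spans."""
    entities, current, cur_type = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                entities.append((cur_type, " ".join(current)))
            current, cur_type = [token], tag[2:]
        elif tag.startswith("I-") and cur_type == tag[2:]:
            current.append(token)
        else:  # "O", or an I- tag that does not match the open span
            if current:
                entities.append((cur_type, " ".join(current)))
            current, cur_type = [], None
    if current:
        entities.append((cur_type, " ".join(current)))
    return entities

tokens = ["import", "numpy", "in", "Python", "3"]
tags   = ["O", "B-LIB", "O", "B-LANG", "I-LANG"]
print(decode_bio(tokens, tags))  # [('LIB', 'numpy'), ('LANG', 'Python 3')]
```

The tag set here (`LIB`, `LANG`) is invented for illustration; real models define their own label inventories.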
## Es Core News Lg
- Author: spacy
- License: GPL-3.0
- Tags: Sequence Labeling · Spanish
- Downloads: 52 · Likes: 1

A CPU-optimized Spanish pipeline including tokenization, POS tagging, dependency parsing, and named entity recognition.
## Da Dacy Large Trf
- Author: chcaa
- License: Apache-2.0
- Tags: Sequence Labeling · Other
- Downloads: 37 · Likes: 4

DaCy is a Danish NLP framework with state-of-the-art pipelines and analysis capabilities for Danish.
## Albert Tiny Chinese Pos
- Author: ckiplab
- License: GPL-3.0
- Tags: Sequence Labeling · Transformers · Chinese
- Downloads: 2,781 · Likes: 2

A lightweight ALBERT model from Academia Sinica's CKIP team, supporting Traditional Chinese NLP tasks.
## Albert Base Chinese Ws
- Author: ckiplab
- License: GPL-3.0
- Tags: Sequence Labeling · Transformers · Chinese
- Downloads: 1,498 · Likes: 1

A Traditional Chinese NLP model from the Academia Sinica CKIP team, based on the ALBERT architecture and supporting tasks such as word segmentation and part-of-speech tagging.
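Chinese word-segmentation (`ws`) models of this kind are usually trained as character-level sequence labelers, where a tag marks whether a character begins or continues a word; turning the tags back into words is a small post-processing step. A hedged sketch, assuming a simple B/I scheme (not taken from this model's card):

```python
def merge_ws_tags(chars, tags):
    """Merge characters into words: 'B' starts a new word, 'I' continues it."""
    words = []
    for ch, tag in zip(chars, tags):
        if tag == "B" or not words:
            words.append(ch)
        else:
            words[-1] += ch
    return words

print(merge_ws_tags(list("我爱自然语言处理"),
                    ["B", "B", "B", "I", "B", "I", "B", "I"]))
# ['我', '爱', '自然', '语言', '处理']
```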
## Czert B Base Cased
- Author: UWB-AIR
- Tags: Large Language Model · Transformers · Other
- Downloads: 560 · Likes: 3

CZERT is a language representation model trained specifically for Czech; it outperforms multilingual BERT on a range of Czech NLP tasks.
## Classical Chinese Punctuation Guwen Biaodian
- Author: raynardj
- Tags: Sequence Labeling · Transformers · Multilingual
- Downloads: 166 · Likes: 24

Automatically inserts punctuation marks into unpunctuated Classical Chinese text, supporting more than twenty types of punctuation.
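Punctuation restoration can be framed as the same kind of character-level sequence labeling: predict, for each character, which mark (if any) follows it, then splice the marks back into the text. An illustrative sketch of the splicing step (the labels here are invented examples, not this model's actual tag inventory):

```python
def insert_punctuation(chars, labels):
    """labels[i] is the punctuation mark to append after chars[i], or '' for none."""
    out = []
    for ch, mark in zip(chars, labels):
        out.append(ch)
        out.append(mark)
    return "".join(out)

chars  = list("学而时习之不亦说乎")
labels = ["", "", "", "", "，", "", "", "", "？"]
print(insert_punctuation(chars, labels))  # 学而时习之，不亦说乎？
```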
## Albert Tiny Chinese
- Author: ckiplab
- License: GPL-3.0
- Tags: Large Language Model · Transformers · Chinese
- Downloads: 773 · Likes: 10

A Traditional Chinese ALBERT model from Academia Sinica's CKIP team, suitable for a range of natural language processing tasks.
## Bert Base Chinese
- Author: ckiplab
- License: GPL-3.0
- Tags: Large Language Model · Chinese
- Downloads: 81.96k · Likes: 26

A Traditional Chinese BERT model from the Academia Sinica CKIP Lab, supporting common natural language processing tasks.
## Stanza Tr
- Author: stanfordnlp
- License: Apache-2.0
- Tags: Sequence Labeling · Other
- Downloads: 9,214 · Likes: 4

Stanza is an accurate and efficient toolkit for linguistic analysis of many human languages. From raw text through syntactic parsing to named entity recognition, it provides state-of-the-art NLP models for the languages it supports.
## De Core News Sm
- Author: spacy
- License: MIT
- Tags: Sequence Labeling · German
- Downloads: 161 · Likes: 1

A CPU-optimized German pipeline including tokenization, POS tagging, morphological analysis, dependency parsing, lemmatization, and named entity recognition.
## Stanza Mr
- Author: stanfordnlp
- License: Apache-2.0
- Tags: Sequence Labeling · Other
- Downloads: 126 · Likes: 0

Stanza is an accurate and efficient toolkit for linguistic analysis of many human languages. From raw text through syntactic parsing to named entity recognition, it brings state-of-the-art NLP models to the languages it supports.
## Albert Base Chinese
- Author: ckiplab
- License: GPL-3.0
- Tags: Large Language Model · Transformers · Chinese
- Downloads: 280 · Likes: 11

A Traditional Chinese Transformers model from Academia Sinica's CKIP group, part of a family spanning ALBERT, BERT, and GPT2 architectures along with accompanying NLP tools.
## Afriberta Large
- Author: castorini
- License: MIT
- Tags: Large Language Model · Transformers · Other
- Downloads: 857 · Likes: 12

AfriBERTa large is a multilingual pre-trained model with roughly 126 million parameters covering 11 African languages, suited to tasks such as text classification and named entity recognition.
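When a subword model like this is fine-tuned for NER, the tokenizer splits words into pieces, and conventionally only the first piece of each word carries the word's label; the remaining pieces are masked out of the loss with -100. A minimal sketch of that alignment, assuming a `word_ids`-style mapping from subword positions to word indices (the example indices are invented, not AfriBERTa's actual tokenization):

```python
def align_labels(word_labels, word_ids):
    """word_ids maps each subword position to its source word index
    (None for special tokens). Keep the label on the first subword of
    each word; mask everything else with -100 so the loss ignores it."""
    aligned, prev = [], None
    for wid in word_ids:
        if wid is None:
            aligned.append(-100)       # special token ([CLS], [SEP], ...)
        elif wid != prev:
            aligned.append(word_labels[wid])  # first subword of the word
        else:
            aligned.append(-100)       # continuation subword
        prev = wid
    return aligned

# 3 words, the second split into two subwords; ends padded by special tokens
print(align_labels([1, 2, 0], [None, 0, 1, 1, 2, None]))
# [-100, 1, 2, -100, 0, -100]
```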
## Bert Base Chinese Pos
- Author: ckiplab
- License: GPL-3.0
- Tags: Sequence Labeling · Chinese
- Downloads: 28.58k · Likes: 16

Provides Traditional Chinese Transformers models and natural language processing tools.
## Albert Base Chinese Pos
- Author: ckiplab
- License: GPL-3.0
- Tags: Sequence Labeling · Transformers · Chinese
- Downloads: 1,095 · Likes: 1

A Traditional Chinese NLP model from Academia Sinica's CKIP team, supporting tasks such as word segmentation and part-of-speech tagging.
## Ja Core News Lg
- Author: spacy
- Tags: Sequence Labeling · Japanese
- Downloads: 53 · Likes: 0

spaCy's CPU-optimized Japanese pipeline, including tokenization, part-of-speech tagging, dependency parsing, and named entity recognition.
## Bert Base Multilingual Cased Finetuned Igbo
- Author: Davlan
- Tags: Large Language Model · Transformers
- Downloads: 16 · Likes: 1

A BERT model adapted to Igbo by fine-tuning multilingual BERT; it outperforms the base multilingual model on Igbo text-processing tasks.
## Umberto Wikipedia Uncased V1
- Author: Musixmatch
- Tags: Large Language Model · Transformers · Other
- Downloads: 1,079 · Likes: 7

UmBERTo is an Italian language model based on the RoBERTa architecture, trained with SentencePiece tokenization and whole-word masking, suitable for a range of natural language processing tasks.
## Vibert4news Base Cased
- Author: NlpHUST
- Tags: Large Language Model · Transformers · Other
- Downloads: 368 · Likes: 6

A BERT model trained on more than 20 GB of Vietnamese news data, suitable for tasks such as sentiment analysis; it performs strongly on the AIViVN comments dataset.